
    Stellar Photometry and Astrometry with Discrete Point Spread Functions

    The key features of the MATPHOT algorithm for precise and accurate stellar photometry and astrometry using discrete Point Spread Functions are described. A discrete Point Spread Function (PSF) is a sampled version of a continuous PSF which describes the two-dimensional probability distribution of photons from a point source (star) just above the detector. The shape information about the photon scattering pattern of a discrete PSF is typically encoded using a numerical table (matrix) or a FITS image file. Discrete PSFs are shifted within an observational model using a 21-pixel-wide damped sinc function, and position partial derivatives are computed using a five-point numerical differentiation formula. Precise and accurate stellar photometry and astrometry are achieved with undersampled CCD observations by using supersampled discrete PSFs that are sampled 2, 3, or more times more finely than the observational data. The precision and accuracy of the MATPHOT algorithm are demonstrated by using the C-language MPD code to analyze simulated CCD stellar observations; measured performance is compared with a theoretical performance model. Detailed analysis of simulated Next Generation Space Telescope observations demonstrates that millipixel relative astrometry and millimag photometric precision are achievable with complicated space-based discrete PSFs. For further information about MATPHOT and MPD, including source code and documentation, see http://www.noao.edu/staff/mighell/matphot
    Comment: 19 pages, 22 figures, accepted for publication in MNRAS
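
    As a concrete illustration of the two numerical ingredients above, the sketch below shifts a discrete PSF with a 21-tap damped sinc kernel and computes an x-position partial derivative with the five-point formula. This is a minimal Python rendition, not the MPD implementation: the Gaussian damping form and its scale, and the sign convention of the shift, are assumptions.

```python
import numpy as np

def damped_sinc_kernel(frac, width=21, damp=3.25):
    """21-tap damped-sinc interpolation kernel for a sub-pixel shift.
    The sinc performs band-limited shifting; the Gaussian damping
    (form and scale are assumptions, not the MPD values) tapers the
    kernel so that 21 taps suffice."""
    half = width // 2
    x = np.arange(-half, half + 1) - frac
    k = np.sinc(x) * np.exp(-(x / damp) ** 2)
    return k / k.sum()  # normalize so total flux is preserved

def shift_psf(psf, dx, dy):
    """Shift a 2-D discrete PSF by (dx, dy) pixels using separable
    damped-sinc interpolation (rows first, then columns)."""
    kx, ky = damped_sinc_kernel(dx), damped_sinc_kernel(dy)
    out = np.apply_along_axis(lambda r: np.convolve(r, kx, mode="same"), 1, psf)
    return np.apply_along_axis(lambda c: np.convolve(c, ky, mode="same"), 0, out)

def dpsf_dx(psf, h=1.0):
    """Five-point central-difference x-derivative of the PSF, the kind
    of position partial derivative used in the fit."""
    p = np.pad(psf, ((0, 0), (2, 2)), mode="edge")
    return (p[:, :-4] - 8*p[:, 1:-3] + 8*p[:, 3:-1] - p[:, 4:]) / (12*h)
```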

    Graphics for uncertainty

    Graphical methods such as colour shading and animation, which are widely available, can be very effective in communicating uncertainty. In particular, the idea of a ‘density strip’ provides a conceptually simple representation of a distribution, and this is explored in a variety of settings, including a comparison of means, regression, and models for contingency tables. Animation is also a very useful device for exploring uncertainty, particularly in the context of flexible models expressed in curves and surfaces whose structure is of particular interest. Animation can further provide a helpful mechanism for exploring data in several dimensions, as illustrated in the simple but very important setting of spatiotemporal data.
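
    A density strip can be produced with a few lines of standard plotting code: shade a one-row image in proportion to the estimated density, so the darkest region marks the most probable values. The sketch below is an illustration assuming a bootstrap-based density for a sample mean, not the paper's own code.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
sample = rng.normal(loc=5.0, scale=1.0, size=50)   # toy data

# Bootstrap the sample mean, then estimate its density on a grid.
boot_means = rng.choice(sample, size=(2000, sample.size)).mean(axis=1)
grid = np.linspace(boot_means.min(), boot_means.max(), 400)
dens = gaussian_kde(boot_means)(grid)

# Density strip: a one-row image whose darkness tracks the density.
fig, ax = plt.subplots(figsize=(6, 1.2))
ax.imshow(dens[np.newaxis, :], cmap="Greys", aspect="auto",
          extent=[grid[0], grid[-1], 0, 1])
ax.set_yticks([])
ax.set_xlabel("mean")
plt.show()
```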

    A scheduling theory framework for GPU tasks efficient execution

    Concurrent execution of tasks in GPUs can reduce the computation time of a workload by overlapping data transfer and execution commands. However, it is difficult to implement an efficient run-time scheduler that minimizes the workload makespan, as many execution orderings should be evaluated. In this paper, we employ scheduling theory to build a model that takes into account the device capabilities, workload characteristics, constraints and objective functions. In our model, GPU task scheduling is reformulated as a flow shop scheduling problem, which allows us to apply and compare well-known methods already developed in the operations research field. In addition, we develop a new heuristic, specifically focused on executing GPU commands, that achieves better scheduling results than previous techniques. Finally, a comprehensive evaluation, showing the suitability and robustness of this new approach, is conducted on three different NVIDIA architectures (Kepler, Maxwell and Pascal).
    Funding: Project TIN2016-0920R, Universidad de Málaga (Campus de Excelencia Internacional Andalucía Tech) and the NVIDIA Corporation donation program.
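
    The flow shop reformulation can be made concrete with the classic two-machine case: treat the host-to-device transfer as the first machine and kernel execution as the second, and order tasks with Johnson's rule to minimize makespan. The sketch below is a baseline from the operations research toolbox the authors invoke, not the paper's new heuristic; task names and times are invented for illustration.

```python
def johnson_order(tasks):
    """Johnson's rule for the two-machine flow shop. Stage 1 is the
    host-to-device transfer, stage 2 the kernel execution: tasks whose
    transfer is no longer than their execution go first (ascending
    transfer time), the rest go last (descending execution time).
    `tasks` is a list of (name, transfer_time, exec_time)."""
    first = sorted((t for t in tasks if t[1] <= t[2]), key=lambda t: t[1])
    last = sorted((t for t in tasks if t[1] > t[2]), key=lambda t: -t[2])
    return first + last

def makespan(order):
    """Makespan when transfers are serialized on the copy engine and each
    kernel starts once its own transfer and the previous kernel finish."""
    transfer_done = exec_done = 0.0
    for _, tr, ex in order:
        transfer_done += tr
        exec_done = max(exec_done, transfer_done) + ex
    return exec_done

tasks = [("A", 2, 5), ("B", 4, 1), ("C", 3, 3)]  # hypothetical GPU tasks
order = johnson_order(tasks)
print([t[0] for t in order], makespan(order))    # -> ['A', 'C', 'B'] 11.0
```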

    Spatial regression and spillover effects in cluster randomized trials with count outcomes.

    This paper describes methodology for analyzing data from cluster randomized trials with count outcomes, taking indirect effects as well as spatial effects into account. Indirect effects are modeled using a novel application of a measure of depth within the intervention arm. Both direct and indirect effects can be estimated accurately even when the proposed model is misspecified. We use spatial regression models with Gaussian random effects, where the individual outcomes have distributions overdispersed with respect to the Poisson, and the corresponding direct and indirect effects have a marginal interpretation. To avoid spatial confounding, we use orthogonal regression, in which random effects represent spatial dependence using a homoscedastic and dimensionally reduced modification of the intrinsic conditional autoregression model. We illustrate the methodology using spatial data from a pair-matched cluster randomized trial against the dengue mosquito vector Aedes aegypti, done in Trujillo, Venezuela.
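
    The orthogonal, dimensionally reduced spatial term can be sketched with a Moran-basis construction: project the adjacency structure onto the space orthogonal to the fixed effects and keep the leading eigenvectors as homoscedastic spatial covariates. This is one standard way to realize the idea described above, not necessarily the authors' exact construction.

```python
import numpy as np

def moran_basis(W, X, q):
    """Dimensionally reduced spatial basis orthogonal to the fixed effects,
    in the spirit of the restricted ICAR model sketched in the abstract.

    W : symmetric binary adjacency matrix of the clusters
    X : fixed-effects design matrix (e.g., intercept + treatment arm)
    q : number of spatial basis vectors to keep
    """
    n = W.shape[0]
    # Project out the fixed effects so the spatial term cannot confound them.
    P = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)
    # Moran operator: its leading eigenvectors capture smooth, positively
    # autocorrelated spatial patterns in the residual space of X.
    M = P @ W @ P
    vals, vecs = np.linalg.eigh(M)
    idx = np.argsort(vals)[::-1][:q]
    # Use these columns as extra covariates in an overdispersed Poisson GLM.
    return vecs[:, idx]
```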

    An intelligent assistant for exploratory data analysis

    In this paper we present an account of the main features of SNOUT, an intelligent assistant for exploratory data analysis (EDA) of social science survey data that incorporates a range of data mining techniques. EDA has much in common with existing data mining techniques: its main objective is to help an investigator reach an understanding of the important relationships in a data set rather than simply develop predictive models for selected variables. Brief descriptions of a number of novel techniques developed for use in SNOUT are presented. These include heuristic variable level inference and classification, automatic category formation, the use of similarity trees to identify groups of related variables, interactive decision tree construction and model selection using a genetic algorithm.
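
    A similarity tree of the kind mentioned above can be approximated by hierarchical clustering of variables under a correlation-based distance. The sketch below is one plausible reading of the technique; SNOUT's actual similarity measure for survey variables (which are often categorical) may differ.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

def variable_similarity_tree(data):
    """Group related survey variables by average-linkage clustering on a
    correlation distance. `data` is an (n_respondents, n_variables) array
    of numeric codes; the result is a SciPy linkage matrix (a tree whose
    tight branches are groups of strongly related variables)."""
    corr = np.corrcoef(data.T)        # variables as rows -> variable x variable
    dist = 1.0 - np.abs(corr)         # similar variables -> small distance
    np.fill_diagonal(dist, 0.0)       # guard against numerical noise
    return linkage(squareform(dist, checks=False), method="average")
```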

    Reconstructing the primordial power spectrum from the CMB

    We propose a straightforward and model-independent methodology for characterizing the sensitivity of CMB and other experiments to wiggles, irregularities, and features in the primordial power spectrum. Assuming that the primordial cosmological perturbations are adiabatic, we present a function space generalization of the usual Fisher matrix formalism, applied to a CMB experiment resembling Planck with and without ancillary data. This work is closely related to other work on recovering the inflationary potential and exploring specific models of non-minimal, or perhaps baroque, primordial power spectra. The approach adopted here, however, most directly expresses what the data are really telling us. We explore in detail the structure of the available information and quantify exactly what features can be reconstructed and at what statistical significance.
    Comment: 43 pages RevTeX, 23 figures
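
    The binned, finite-dimensional version of the Fisher formalism is easy to write down: treat band powers of the primordial spectrum as parameters and sum the derivative outer products over multipoles, weighted by the cosmic-variance errors. The sketch below is a schematic cosmic-variance-limited version with instrument noise omitted; the sky fraction is an assumed placeholder.

```python
import numpy as np

def fisher_matrix(dCl_dp, Cl, fsky=0.8):
    """Fisher matrix for parameters p_i (e.g., band powers of the
    primordial spectrum) from a measured angular power spectrum.

    dCl_dp : array (n_params, n_ell) of derivatives dC_l/dp_i
    Cl     : array (n_ell,) of fiducial C_l values, starting at l = 2
    """
    ells = np.arange(2, 2 + Cl.size)
    # Cosmic-variance error bar on each C_l for sky fraction fsky.
    var = 2.0 * Cl**2 / ((2 * ells + 1) * fsky)
    # F_ij = sum_l (dC_l/dp_i)(dC_l/dp_j) / Var(C_l)
    return np.einsum("il,jl,l->ij", dCl_dp, dCl_dp, 1.0 / var)
```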

    Further Investigation of the Time Delay, Magnification Ratios, and Variability in the Gravitational Lens 0218+357

    High-precision VLA flux density measurements for the lensed images of 0218+357 yield a time delay of 10.1(+1.5-1.6) days (95% confidence). This is consistent with independent measurements carried out at the same epoch (Biggs et al. 1999), lending confidence in the robustness of the time delay measurement. However, since both measurements make use of the same features in the light curves, it is possible that the effects of unmodelled processes, such as scintillation or microlensing, are biasing both time delay measurements in the same way. Our time delay estimates result in confidence intervals that are somewhat larger than those of Biggs et al., probably because we adopt a more general model of the source variability, allowing for constant and variable components. When considered in relation to the lens mass model of Biggs et al., our best-fit time delay implies a Hubble constant of H_o = 71(+17-23) km/s/Mpc for Omega_o = 1 and lambda_o = 0 (95% confidence; filled beam). This confidence interval for H_o does not reflect systematic error, which may be substantial, due to uncertainty in the position of the lens galaxy. We also measure the flux ratio of the variable components of 0218+357, a measurement of a small region that should more closely represent the true lens magnification ratio. We find ratios of 3.2(+0.3-0.4) (95% confidence; 8 GHz) and 4.3(+0.5-0.8) (15 GHz). Unlike the reported flux ratios on scales of 0.1", these ratios are not significantly different. We investigate the significance of apparent differences in the variability properties of the two images of the background active galactic nucleus. We conclude that the differences are not significant, and that time series much longer than our 100-day time series will be required to investigate propagation effects in this way.
    Comment: 33 pages, 9 figures. Accepted for publication in ApJ. Light curve data may be found at http://space.mit.edu/RADIO/papers.htm
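
    A generic way to extract a delay from two image light curves is to shift one curve, interpolate it onto the other's time stamps, and maximize the correlation, as in the sketch below. This is a textbook estimator shown for illustration only; the paper's analysis instead fits a source-variability model with constant and variable flux components.

```python
import numpy as np

def delay_by_cross_correlation(t, f1, f2, lags):
    """Grid-search time-delay estimate between two light curves sampled
    at times `t`: evaluate image 2 at t + lag by linear interpolation
    and keep the lag maximizing the correlation with image 1. Edge
    samples are clamped by np.interp, which slightly biases long lags."""
    best_lag, best_r = None, -np.inf
    for lag in lags:
        f2_shifted = np.interp(t + lag, t, f2)  # f2 evaluated at t + lag
        r = np.corrcoef(f1, f2_shifted)[0, 1]
        if r > best_r:
            best_lag, best_r = lag, r
    return best_lag, best_r

# e.g. delay_by_cross_correlation(t, fluxA, fluxB, np.linspace(0, 20, 201))
```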

    Finite-Size Scaling in the Energy-Entropy Plane for the 2D ±J Ising Spin Glass

    For $L \times L$ square lattices with $L \le 20$ the 2D Ising spin glass with +1 and -1 bonds is found to have a strong correlation between the energy and the entropy of its ground states. A fit to the data gives the result that each additional broken bond in the ground state of a particular sample of random bonds increases the ground state degeneracy by approximately a factor of 10/3. For $x = 0.5$ (where $x$ is the fraction of negative bonds), over this range of $L$, the characteristic entropy defined by the energy-entropy correlation scales with size as $L^{1.78(2)}$. Anomalous scaling is not found for the characteristic energy, which essentially scales as $L^2$. When $x = 0.25$, a crossover to $L^2$ scaling of the entropy is seen near $L = 12$. The results found here suggest a natural mechanism for the unusual behavior of the low temperature specific heat of this model, and illustrate the dangers of extrapolating from small $L$.
    Comment: 9 pages, two-column format; to appear in J. Statistical Physics
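
    The energy-entropy data behind such fits can be generated exactly for very small systems by brute-force enumeration: count all ground states of a random ±J sample and record the minimum energy and log-degeneracy. The sketch below is feasible only for tiny lattices (roughly $L \le 4$); the $L \le 20$ results in the paper require specialized ground-state algorithms.

```python
import numpy as np
from itertools import product

def ground_state_energy_entropy(L, rng):
    """Enumerate all spin configurations of an L x L +-J Ising lattice
    (periodic boundaries) and return the ground-state energy and the
    entropy log(degeneracy). Exponential cost: use only for tiny L."""
    Jx = rng.choice([-1, 1], size=(L, L))  # bond to right neighbour
    Jy = rng.choice([-1, 1], size=(L, L))  # bond to neighbour below
    best_e, degeneracy = None, 0
    for bits in product((-1, 1), repeat=L * L):
        s = np.array(bits).reshape(L, L)
        e = -np.sum(Jx * s * np.roll(s, -1, axis=1)) \
            - np.sum(Jy * s * np.roll(s, -1, axis=0))
        if best_e is None or e < best_e:
            best_e, degeneracy = e, 1
        elif e == best_e:
            degeneracy += 1
    return best_e, np.log(degeneracy)

# Example: one 3x3 sample with half the bonds negative on average (x = 0.5).
print(ground_state_energy_entropy(3, np.random.default_rng(0)))
```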